# This is a so-called "R chunk" where you can write R code.
date()
## [1] "Mon Nov 28 16:44:28 2022"
The first week was somewhat heavy for me, as I was ill and also a rookie with R. Hopefully things will start running more smoothly as I get more familiar with the language and learn to code on my own. Reading "R for Health Data Science" and checking the code in RStudio simultaneously did not work too well for me: I felt it broke the thought process, even when the same things are presented in both. At this point, the reading version alone would have been enough for me, as it also describes the code. I do still see the point of having the RStudio version on the side to see what errors and such look like.
I find the material useful for getting to know R. For example, the plotting part is excellent for learning how to draw nice plots. It also saves a lot of time to have material that guides one through the basics, rather than starting by googling somewhat randomly to find the basic options. (I tried to learn something else by the latter method; not really advisable.)
When I was looking for a course to study R, my supervisor recommended this one. My first impression is that this course will indeed be really useful.
date()
## [1] "Mon Nov 28 16:44:28 2022"
First, read the data into R and inspect it.
# access libraries that will be needed in exercise 2
library(tidyverse)
## Warning: package 'tidyverse' was built under R version 4.2.2
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.2 ──
## ✔ ggplot2 3.4.0 ✔ purrr 0.3.5
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.4.1
## ✔ readr 2.1.3 ✔ forcats 0.5.2
## Warning: package 'tibble' was built under R version 4.2.2
## Warning: package 'tidyr' was built under R version 4.2.2
## Warning: package 'readr' was built under R version 4.2.2
## Warning: package 'purrr' was built under R version 4.2.2
## Warning: package 'dplyr' was built under R version 4.2.2
## Warning: package 'stringr' was built under R version 4.2.2
## Warning: package 'forcats' was built under R version 4.2.2
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
library(ggplot2)
library(GGally)
## Warning: package 'GGally' was built under R version 4.2.2
## Registered S3 method overwritten by 'GGally':
## method from
## +.gg ggplot2
# let's have a look at the data
students2014 <- read.csv(file = '\\\\ad.helsinki.fi/home/h/hkonstar/Desktop/IDOS/IODS-project/data/exp2data.csv')
str(students2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : chr "F" "M" "F" "M" ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
head(students2014)
## gender age attitude deep stra surf points
## 1 F 53 3.7 3.583333 3.375 2.583333 25
## 2 M 55 3.1 2.916667 2.750 3.166667 12
## 3 F 49 2.5 3.500000 3.625 2.250000 24
## 4 M 53 3.5 3.500000 3.125 2.250000 10
## 5 M 49 3.7 3.666667 3.625 2.833333 22
## 6 F 38 3.8 4.750000 3.625 2.416667 21
# if you prefer you may use the following instead or in addition
# glimpse(students2014)
This dataset consists of 7 variables and 166 observations.
Variable descriptions:
gender = F for female, M for male
age = age in years
attitude = mean score for attitude toward statistics
deep = mean score for items related to a deep approach to studying
stra = mean score for items related to a strategic approach to studying
surf = mean score for items related to a surface approach to studying
points = exam points
Visualize the data so that gender is distinguished. Use ggpairs() to create a matrix of plots from the variables in the dataset.
# create a plot matrix with ggpairs()
p <- ggpairs(students2014, mapping = aes(col = gender, alpha = 0.3), lower = list(combo = wrap("facethist", bins = 20)))
# draw the plot
p
# show summaries for variables in students2014
summary(students2014)
## gender age attitude deep
## Length:166 Min. :17.00 Min. :1.400 Min. :1.583
## Class :character 1st Qu.:21.00 1st Qu.:2.600 1st Qu.:3.333
## Mode :character Median :22.00 Median :3.200 Median :3.667
## Mean :25.51 Mean :3.143 Mean :3.680
## 3rd Qu.:27.00 3rd Qu.:3.700 3rd Qu.:4.083
## Max. :55.00 Max. :5.000 Max. :4.917
## stra surf points
## Min. :1.250 Min. :1.583 Min. : 7.00
## 1st Qu.:2.625 1st Qu.:2.417 1st Qu.:19.00
## Median :3.188 Median :2.833 Median :23.00
## Mean :3.121 Mean :2.787 Mean :22.72
## 3rd Qu.:3.625 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :5.000 Max. :4.333 Max. :33.00
Visual inspection of the data revealed the following:
There are about twice as many females as males in the data. The scatterplot of points against attitude indicates a linear relationship, and there also seems to be a linear relationship for deep and stra.
The gender distributions for age, deep, and points look similar. For attitude, the male distribution leans towards higher scores, while for females it looks closer to a normal distribution. For stra the situation is the reverse: females tend to have higher scores. For surf, females show a sharply peaking, roughly normal distribution, while males lean towards lower scores.
Numerical summaries for the variables:
The following listing gives the 1st and 3rd quartiles and the median for each variable.
age: 21.0 - 27.0, median 22.0
attitude: 2.6 - 3.7, median 3.2
deep: 3.3 - 4.1, median 3.7
stra: 2.6 - 3.6, median 3.2
surf: 2.4 - 3.2, median 2.8
points: 19.0 - 27.8, median 23.0
These agree with the graphical summaries.
Surf correlates negatively with attitude, deep, and stra, meaning that when surf increases the others decrease. When gender is taken into account, stra is not significantly correlated with surf for either males or females, and attitude and deep correlate negatively with surf only for males.
Points correlate positively with attitude, meaning that points and attitude increase together. When gender is taken into account, the correlation is significant for both males and females.
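These visual impressions could be double-checked numerically; a quick sketch of the pairwise correlations (an illustrative addition, dropping the non-numeric gender column):
# pairwise correlations of the numeric variables, rounded for readability
students2014 %>% dplyr::select(-gender) %>% cor() %>% round(digits = 2)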
First, select the three variables that correlate most strongly with points to be used as explanatory variables: attitude, stra, and surf.
Create a plot matrix and then a regression model with multiple explanatory variables.
# create a plot matrix with ggpairs()
ggpairs(students2014, lower = list(combo = wrap("facethist", bins = 20)))
# create a regression model with multiple explanatory variables
multiple_variables <- lm(points ~ attitude + stra + surf, data = students2014)
# print out a summary of the model
summary(multiple_variables)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = students2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
Inspecting the results shows that points correlate in a statistically significant manner only with attitude. Let's remove stra and surf from the model and fit a simple regression on attitude alone.
# a scatter plot of points versus attitude
qplot(attitude, points, data = students2014) + geom_smooth(method = "lm")
## Warning: `qplot()` was deprecated in ggplot2 3.4.0.
## `geom_smooth()` using formula = 'y ~ x'
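Since qplot() is deprecated as of ggplot2 3.4.0 (see the warning above), the same figure could be drawn with ggplot() directly; a minimal equivalent sketch:
# the same scatter plot with a fitted regression line, without the deprecated qplot()
ggplot(students2014, aes(x = attitude, y = points)) +
  geom_point() +
  geom_smooth(method = "lm")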
# fit a linear model
linear_attitude <- lm(points ~ attitude, data = students2014)
# print out a summary of the model
summary(linear_attitude)
##
## Call:
## lm(formula = points ~ attitude, data = students2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.6372 1.8303 6.358 1.95e-09 ***
## attitude 3.5255 0.5674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
Inspecting the results shows that the adjusted R-squared is 0.19, indicating that attitude explains about 19 % of the variation in points.
Now, to check the assumptions we'll produce diagnostic plots: Residuals vs Fitted values, Normal QQ-plot, and Residuals vs Leverage.
# draw diagnostic plots using the plot() function. Choose the plots 1, 2 and 5; Residuals vs Fitted values, Normal QQ-plot, and Residuals vs Leverage
plot(linear_attitude, which = c(1,2,5))
Residuals vs Fitted plot
The line stays close to 0, indicating that the model works nicely for fitted values up to about 24 and less well at the higher end. Observations 35, 56, and 145 are flagged as potential outliers.
Normal Q-Q plot
This probability plot shows an approximately linear pattern, indicating that the model fits nicely, although at the low and high ends the fit is not as good. Observations 35, 56, and 145 are marked as possible outliers.
Residuals vs Leverage plot
The line stays quite close to 0 and no observation has unusually high leverage, which indicates there are no strongly influential outliers in the data. Observations 35, 56, and 71 are marked as potential outliers.
A linear model is expected to fit these data well, and the diagnostic plots support that choice.
Read the data from a CSV file into R and check that it looks fine.
# access libraries that will be needed
library(boot)
## Warning: package 'boot' was built under R version 4.2.2
library(readr)
library(tidyverse)
library(dplyr)
library(GGally)
library(ggplot2)
library(finalfit)
## Warning: package 'finalfit' was built under R version 4.2.2
# read the data into a data frame
alc <- read.csv(file = "\\\\ad.helsinki.fi/home/h/hkonstar/Desktop/IDOS/IODS-project/data/alc.csv")
# print variable names
names(alc)
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "guardian" "traveltime" "studytime" "schoolsup"
## [16] "famsup" "activities" "nursery" "higher" "internet"
## [21] "romantic" "famrel" "freetime" "goout" "Dalc"
## [26] "Walc" "health" "failures" "paid" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
There are 370 observations and 35 variables in the data. The data set consists of survey data from students who answered two different course questionnaires, math and por. The data include information about alcohol consumption, school grades, and other school-related items such as time spent studying, as well as demographic variables like parental education.
I will study the relationship between high/low alcohol consumption and the following variables: absences, famrel, health, and G3 (final grade). My hypotheses are:
high_use will correlate positively with absences (high alcohol usage is related to more absences)
high_use will correlate negatively with famrel (high alcohol usage is related to worse family relationships)
high_use will correlate negatively with health (high alcohol usage is related to worse health status)
high_use will correlate negatively with final grade (G3) (high alcohol usage is related to worse final grades)
# initialize a plot of high_use and absences
gabs <- ggplot(alc, aes(x = high_use, y = absences))
# define the plot as a box plot and draw it
gabs + geom_boxplot(aes(col=sex)) + ggtitle("Student absences by alcohol consumption and sex")
summary(alc$absences)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.000 1.000 3.000 4.511 6.000 45.000
# initialize a plot of high_use and quality of family relationships
gfam <- ggplot(alc, aes(x = high_use, y = famrel))
# define the plot as a box plot and draw it
gfam + geom_boxplot(aes(col=sex)) + ggtitle("Family relationship quality by alcohol consumption and sex")
summary(alc$famrel)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.000 4.000 4.000 3.935 5.000 5.000
# initialize a plot of high_use and health
ghlth <- ggplot(alc, aes(x = high_use, y = health))
# define the plot as a box plot and draw it
ghlth + geom_boxplot(aes(col=sex)) + ggtitle("Health by alcohol consumption and sex")
summary(alc$health)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 1.000 3.000 4.000 3.562 5.000 5.000
# initialize a plot of high_use and final grade (G3)
gfg <- ggplot(alc, aes(x = high_use, y = G3))
# define the plot as a box plot and draw it
gfg + geom_boxplot(aes(col=sex)) + ylab("final grade") + ggtitle("Final grade by alcohol consumption and sex")
summary(alc$G3)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.00 10.00 12.00 11.52 14.00 18.00
# tabulate data
dependent <- "high_use"
explanatory <- c("absences", "famrel", "health", "G3")
alc %>%
summary_factorlist(dependent, explanatory, p = TRUE,
add_dependent_label = TRUE)
## Dependent: high_use FALSE TRUE p
## absences Mean (SD) 3.7 (4.5) 6.4 (7.1) <0.001
## famrel Mean (SD) 4.0 (0.9) 3.8 (0.9) 0.019
## health Mean (SD) 3.5 (1.4) 3.7 (1.4) 0.134
## G3 Mean (SD) 11.8 (3.4) 10.9 (3.0) 0.011
According to the box plot, absences increase slightly with high alcohol use, and the tabulation confirms the connection. This supports my hypothesis.
According to both the box plot and the tabulation, high alcohol usage is connected with poorer family relationships. This is in line with my hypothesis.
For health, both the box plot and the tabulation show no difference. This does not support my hypothesis that health would be worse with high alcohol usage.
The final grade seems to decrease a bit with high alcohol use according to both the box plot and the tabulation. This is aligned with my hypothesis.
I will start with a model including all four variables (even though this is likely unnecessary given the tabulation results). After that I will remove variables one by one, checking the AIC in between to see which model to choose and where to stop.
# find the model with glm()
m <- glm(high_use ~ absences + famrel + health + G3, data = alc, family = "binomial")
# print out a summary of the model
summary(m)
##
## Call:
## glm(formula = high_use ~ absences + famrel + health + G3, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.3498 -0.8206 -0.6820 1.1424 1.9666
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.12490 0.72091 0.173 0.862450
## absences 0.08193 0.02269 3.612 0.000304 ***
## famrel -0.27919 0.12753 -2.189 0.028582 *
## health 0.13687 0.08806 1.554 0.120138
## G3 -0.06840 0.03615 -1.892 0.058439 .
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 423.36 on 365 degrees of freedom
## AIC: 433.36
##
## Number of Fisher Scoring iterations: 4
# print out the coefficients of the model
coef(m)
## (Intercept) absences famrel health G3
## 0.12490199 0.08193324 -0.27918935 0.13686832 -0.06840295
# compute odds ratios (OR)
OR <- coef(m) %>% exp()
# compute confidence intervals (CI)
CI <- confint(m)
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR, CI)
## OR 2.5 % 97.5 %
## (Intercept) 1.1330374 -1.29609093 1.539770832
## absences 1.0853833 0.03949360 0.128701401
## famrel 0.7563967 -0.53111331 -0.029290043
## health 1.1466771 -0.03356517 0.312503540
## G3 0.9338841 -0.13975665 0.002411183
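Note that confint() returns intervals on the log-odds scale, while the odds ratios above have been exponentiated. To read both on the same scale, the intervals could be exponentiated as well; a one-line sketch:
# put the confidence intervals on the odds-ratio scale to match OR
cbind(OR, exp(CI))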
Let's remove health from the explanatory variables.
# find the model with glm()
m2 <- glm(high_use ~ absences + famrel + G3, data = alc, family = "binomial")
# print out a summary of the model
summary(m2)
##
## Call:
## glm(formula = high_use ~ absences + famrel + G3, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.2678 -0.8248 -0.6893 1.1897 2.0384
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 0.59749 0.65204 0.916 0.359486
## absences 0.08132 0.02276 3.574 0.000352 ***
## famrel -0.25491 0.12550 -2.031 0.042234 *
## G3 -0.07451 0.03574 -2.085 0.037091 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 425.82 on 366 degrees of freedom
## AIC: 433.82
##
## Number of Fisher Scoring iterations: 4
# print out the coefficients of the model
coef(m2)
## (Intercept) absences famrel G3
## 0.59749109 0.08131699 -0.25491306 -0.07451065
# compute odds ratios (OR)
OR2 <- coef(m2) %>% exp()
# compute confidence intervals (CI)
CI2 <- confint(m2)
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR2, CI2)
## OR2 2.5 % 97.5 %
## (Intercept) 1.8175530 -0.68389274 1.882796853
## absences 1.0847147 0.03882378 0.128213991
## famrel 0.7749839 -0.50246478 -0.008594584
## G3 0.9281976 -0.14512892 -0.004553643
The AIC is slightly worse for this model, but since all the explanatory variables become significant and it is better to have fewer explanatory variables, I will continue with this model.
The coefficient confidence intervals (on the log-odds scale) do not cross 0, which also indicates the explanatory variables are statistically significant. Just to be sure this model is the best fit, I will test a model without famrel.
# find the model with glm()
m3 <- glm(high_use ~ absences + G3, data = alc, family = "binomial")
# print out a summary of the model
summary(m3)
##
## Call:
## glm(formula = high_use ~ absences + G3, family = "binomial",
## data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -2.3286 -0.8298 -0.7219 1.2113 1.9242
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -0.38732 0.43500 -0.890 0.373259
## absences 0.08423 0.02309 3.648 0.000264 ***
## G3 -0.07606 0.03553 -2.141 0.032311 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 452.04 on 369 degrees of freedom
## Residual deviance: 429.93 on 367 degrees of freedom
## AIC: 435.93
##
## Number of Fisher Scoring iterations: 4
# print out the coefficients of the model
coef(m3)
## (Intercept) absences G3
## -0.38731815 0.08422737 -0.07605893
# compute odds ratios (OR)
OR3 <- coef(m3) %>% exp()
# compute confidence intervals (CI)
CI3 <- confint(m3)
## Waiting for profiling to be done...
# print out the odds ratios with their confidence intervals
cbind(OR3, CI3)
## OR3 2.5 % 97.5 %
## (Intercept) 0.6788751 -1.25260555 0.459482521
## absences 1.0878762 0.04110528 0.131708308
## G3 0.9267616 -0.14621889 -0.006471056
AIC is higher for this model than for the second one, so we will continue with the model having absences + famrel + G3 as explanatory variables.
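For reference, the AIC values of the three candidate models could also be compared side by side in a single call:
# side-by-side AIC comparison of the three candidate models
AIC(m, m2, m3)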
# tabulate data
dependent <- "high_use"
explanatory2 <- c("absences", "famrel", "G3")
alc %>%
summary_factorlist(dependent, explanatory2, p = TRUE,
add_dependent_label = TRUE)
## Dependent: high_use FALSE TRUE p
## absences Mean (SD) 3.7 (4.5) 6.4 (7.1) <0.001
## famrel Mean (SD) 4.0 (0.9) 3.8 (0.9) 0.019
## G3 Mean (SD) 11.8 (3.4) 10.9 (3.0) 0.011
In this model, an increase in school absences increases the likelihood of high alcohol use, while increases in famrel (better family relationships) and in G3 (final grade) decrease that likelihood.
These results support my hypothesis that school absences are positively related to high alcohol use. In line with my hypotheses, famrel and G3 were negatively related to high alcohol use. Self-reported health, however, was not related to alcohol usage, contrary to my hypothesis.
Use model 2, which has absences, famrel, and G3 as predictors.
# predict() the probability of high_use
probabilities <- predict(m2, type = "response")
# add the predicted probabilities to 'alc'
alc <- mutate(alc, probability = probabilities)
# use the probabilities to make a prediction of high_use: TRUE when probability > 0.5
alc <- mutate(alc, prediction = probability > 0.5)
# see the last ten original classes, predicted probabilities, and class predictions
select(alc, failures, absences, sex, high_use, probability, prediction) %>% tail(10)
## failures absences sex high_use probability prediction
## 361 0 3 M FALSE 0.2410652 FALSE
## 362 1 0 M FALSE 0.3959997 FALSE
## 363 1 7 M TRUE 0.4361317 TRUE
## 364 0 1 F FALSE 0.1839390 FALSE
## 365 0 6 F FALSE 0.3704366 FALSE
## 366 1 2 F FALSE 0.2917307 FALSE
## 367 0 2 F FALSE 0.2398221 FALSE
## 368 0 3 F FALSE 0.5716255 FALSE
## 369 0 4 M TRUE 0.3645417 TRUE
## 370 0 2 M TRUE 0.2680314 TRUE
# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 259 0
## TRUE 0 111
# initialize a plot of 'high_use' versus 'probability' in 'alc'
g <- ggplot(alc, aes(x = probability, y = high_use))
# define the geom as points and draw the plot
g + geom_point(aes(col = prediction))
# tabulate the target variable versus the predictions
table(high_use = alc$high_use, prediction = alc$prediction) %>% prop.table() %>% addmargins()
## prediction
## high_use FALSE TRUE Sum
## FALSE 0.7 0.0 0.7
## TRUE 0.0 0.3 0.3
## Sum 0.7 0.3 1.0
# define a loss function (mean prediction error)
loss_func <- function(class, prob) {
n_wrong <- abs(class - prob) > 0.5
mean(n_wrong)
}
# call loss_func to compute the average number of wrong predictions in the (training) data
loss_func(class = alc$high_use, prob = alc$probability)
## [1] 0.07567568
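Since the boot package was loaded at the start of this exercise, the model's out-of-sample performance could also be estimated with 10-fold cross-validation, reusing the loss function above as the cost; a sketch:
# 10-fold cross-validation of model m2; delta[1] is the average prediction error
cv <- cv.glm(data = alc, cost = loss_func, glmfit = m2, K = 10)
cv$delta[1]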
The plot shows that low alcohol use is more common than high alcohol use. Cross tabulation shows that 70 % of the observations are low-usage and the remaining 30 % high-usage; with threshold-based predictions, some misclassification between these groups is to be expected.
I will be using the Boston data set from the late 1970s, which concerns housing values in the suburbs of Boston. First the data set needs to be loaded; it is included in the MASS package.
# access the MASS package
library(MASS)
## Warning: package 'MASS' was built under R version 4.2.2
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
# load the data
data("Boston")
# explore the dataset
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
The data consist of 506 observations and 14 variables, mostly numeric, with two integer variables (chas and rad).
The variables include, for example, the per capita crime rate, the proportion of owner-occupied units built before 1940, location relative to the Charles River, and the percentage of lower-status population.
#access needed libraries
library(ggplot2)
library(dplyr)
library(tidyr)
library(tidyverse)
library(GGally)
# create plot matrixes with ggpairs()
p <- ggpairs(Boston[,1:5], lower = list(combo = wrap("facethist", bins = 20)))
p2 <- ggpairs(Boston[,6:10], lower = list(combo = wrap("facethist", bins = 20)))
# draw the plots
p
p2
summary(Boston)
## crim zn indus chas
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000
## 1st Qu.: 0.08205 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000
## nox rm age dis
## Min. :0.3850 Min. :3.561 Min. : 2.90 Min. : 1.130
## 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02 1st Qu.: 2.100
## Median :0.5380 Median :6.208 Median : 77.50 Median : 3.207
## Mean :0.5547 Mean :6.285 Mean : 68.57 Mean : 3.795
## 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08 3rd Qu.: 5.188
## Max. :0.8710 Max. :8.780 Max. :100.00 Max. :12.127
## rad tax ptratio black
## Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32
## 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38
## Median : 5.000 Median :330.0 Median :19.05 Median :391.44
## Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67
## 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23
## Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90
## lstat medv
## Min. : 1.73 Min. : 5.00
## 1st Qu.: 6.95 1st Qu.:17.02
## Median :11.36 Median :21.20
## Mean :12.65 Mean :22.53
## 3rd Qu.:16.95 3rd Qu.:25.00
## Max. :37.97 Max. :50.00
The variable distributions differ greatly. In these plots, which cover only part of the variable pairs, most of the variables appear to be correlated with each other.
First, I will standardize the data set and view its summaries.
# center and standardize variables
boston_scaled <- scale(Boston)
# summaries of the scaled variables
summary(boston_scaled)
## crim zn indus chas
## Min. :-0.419367 Min. :-0.48724 Min. :-1.5563 Min. :-0.2723
## 1st Qu.:-0.410563 1st Qu.:-0.48724 1st Qu.:-0.8668 1st Qu.:-0.2723
## Median :-0.390280 Median :-0.48724 Median :-0.2109 Median :-0.2723
## Mean : 0.000000 Mean : 0.00000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.007389 3rd Qu.: 0.04872 3rd Qu.: 1.0150 3rd Qu.:-0.2723
## Max. : 9.924110 Max. : 3.80047 Max. : 2.4202 Max. : 3.6648
## nox rm age dis
## Min. :-1.4644 Min. :-3.8764 Min. :-2.3331 Min. :-1.2658
## 1st Qu.:-0.9121 1st Qu.:-0.5681 1st Qu.:-0.8366 1st Qu.:-0.8049
## Median :-0.1441 Median :-0.1084 Median : 0.3171 Median :-0.2790
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.5981 3rd Qu.: 0.4823 3rd Qu.: 0.9059 3rd Qu.: 0.6617
## Max. : 2.7296 Max. : 3.5515 Max. : 1.1164 Max. : 3.9566
## rad tax ptratio black
## Min. :-0.9819 Min. :-1.3127 Min. :-2.7047 Min. :-3.9033
## 1st Qu.:-0.6373 1st Qu.:-0.7668 1st Qu.:-0.4876 1st Qu.: 0.2049
## Median :-0.5225 Median :-0.4642 Median : 0.2746 Median : 0.3808
## Mean : 0.0000 Mean : 0.0000 Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 1.6596 3rd Qu.: 1.5294 3rd Qu.: 0.8058 3rd Qu.: 0.4332
## Max. : 1.6596 Max. : 1.7964 Max. : 1.6372 Max. : 0.4406
## lstat medv
## Min. :-1.5296 Min. :-1.9063
## 1st Qu.:-0.7986 1st Qu.:-0.5989
## Median :-0.1811 Median :-0.1449
## Mean : 0.0000 Mean : 0.0000
## 3rd Qu.: 0.6024 3rd Qu.: 0.2683
## Max. : 3.5453 Max. : 2.9865
After scaling, the mean of each variable is 0 and the standard deviation of each is 1, so the variables are on a comparable scale.
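For reference, scale() standardizes each column by subtracting its mean and dividing by its standard deviation; a manual equivalent for a single variable:
# manual standardization of one column, equivalent to what scale() does
crim_scaled <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
summary(crim_scaled)  # matches the crim column of the scaled summary above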
Next, a categorical crime variable will be created, with break points based on the quantiles of the crim variable (per capita crime rate by town). The data set is then divided into train and test sets, so that 80 % of randomly selected observations belong to the train set.
# class of the boston_scaled object
class(boston_scaled)
## [1] "matrix" "array"
# scale() returns a matrix, so it needs to be changed to a data frame for further analysis
boston_scaled <- as.data.frame(boston_scaled)
# create a quantile vector of crime and print it
bins <- quantile(boston_scaled$crim)
bins
## 0% 25% 50% 75% 100%
## -0.419366929 -0.410563278 -0.390280295 0.007389247 9.924109610
# create a categorical variable 'crime' by using the bins just created as break points
crime <- cut(boston_scaled$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high"))
# add the new categorical value to scaled data
boston_scaled <- data.frame(boston_scaled, crime)
# remove original crim from the dataset
boston_scaled <- dplyr::select(boston_scaled, -crim)
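# since the break points are quantiles, the four crime classes should be roughly
# equal in size; a quick sanity check (illustrative addition):
table(boston_scaled$crime)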
# create a random split of the data: 80 % of the observations to the training set, the remaining 20 % to the test set
# number of rows in the Boston data set
n <- nrow(boston_scaled)
# choose randomly 80% of the rows
ind <- sample(n, size = n * 0.8)
# create train set
train <- boston_scaled[ind,]
# create test set
test <- boston_scaled[-ind,]
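Note that sample() draws a new random split on every run, so the results below change between knits. Fixing a seed before sampling would make the split reproducible; a minimal sketch (the seed value is arbitrary):
# fix the RNG state so the 80/20 split is reproducible across knits
set.seed(2022)
ind <- sample(n, size = n * 0.8)
train <- boston_scaled[ind,]
test <- boston_scaled[-ind,]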
We'll fit a linear discriminant analysis to see how well the other variables separate the categories of the crime variable in this data set. The data set was divided into two parts: a train set to fit the model and a test set to evaluate how the model works. First, we will create the model and plot the results.
# linear discriminant analysis
lda.fit <- lda(crime ~ ., data = train)
# print the lda.fit object
lda.fit
## Call:
## lda(crime ~ ., data = train)
##
## Prior probabilities of groups:
## low med_low med_high high
## 0.2475248 0.2599010 0.2326733 0.2599010
##
## Group means:
## zn indus chas nox rm age
## low 0.93113549 -0.8915545 -0.15421606 -0.8476703 0.4002117 -0.8631220
## med_low -0.05765212 -0.3236842 -0.08484810 -0.6012976 -0.1043122 -0.4126580
## med_high -0.40422275 0.1984000 0.27216352 0.3939099 0.1269322 0.4722837
## high -0.48724019 1.0170492 -0.04735191 1.0611370 -0.3903448 0.8011608
## dis rad tax ptratio black lstat
## low 0.8876579 -0.6867152 -0.7507376 -0.37115650 0.3769102 -0.747306718
## med_low 0.4373103 -0.5410786 -0.4777753 -0.06656289 0.3218798 -0.192263853
## med_high -0.4011540 -0.4467344 -0.3347512 -0.30869116 0.1068762 -0.001888903
## high -0.8470866 1.6388211 1.5145512 0.78158339 -0.7433145 0.916761609
## medv
## low 0.475279353
## med_low 0.006581094
## med_high 0.187404278
## high -0.694053525
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3
## zn 0.07651738 0.68808787 -1.1333836121
## indus 0.09198143 -0.22364665 0.3210045355
## chas -0.03427456 -0.05835293 -0.0003223258
## nox 0.41886546 -0.66234051 -1.2018169837
## rm -0.01522277 -0.09276509 -0.1420126497
## age 0.16073355 -0.43500635 -0.1675307736
## dis -0.05032656 -0.27397472 0.4051816164
## rad 3.57572479 0.96877290 -0.2320020521
## tax 0.08321764 -0.10683853 0.7763769476
## ptratio 0.12830039 0.08884604 -0.3275066834
## black -0.08698591 0.02902095 0.1209544749
## lstat 0.20044616 -0.23298822 0.3077145825
## medv 0.11018334 -0.40640780 -0.1315251145
##
## Proportion of trace:
## LD1 LD2 LD3
## 0.9590 0.0303 0.0107
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 2)
Observations categorized as high cluster together, clearly apart from the other categories, although some med_high observations fall into that cluster. The low, med_low, and med_high clusters partly overlap, yet they can still be distinguished from each other. Accessibility to radial highways (variable rad) seems to have the highest impact in separating the "high" crime category.
Next, the trained model will be used to predict the categories in the test data.
# save the crime categories from test data
correct_classes <- test$crime
# remove the crime variable from test data
test <- dplyr::select(test, -crime)
# predict classes with test data
lda.pred <- predict(lda.fit, newdata = test)
# cross tabulate the results
table(correct = correct_classes, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 16 10 1 0
## med_low 1 13 7 0
## med_high 1 8 20 3
## high 0 0 0 22
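The overall test-set accuracy can be read off the table; a one-line check:
# proportion of correct predictions: from the table, (16 + 13 + 20 + 22) / 102 ≈ 0.70
mean(correct_classes == lda.pred$class)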
Predictions were most accurate for the high category, where all 22 observations went to the right class. For med_high, 20 of 32 predictions were correct, and most of the mis-categorized ones went to med_low. For med_low, 13 of 21 predictions were correct, with most errors going to med_high. For low, 16 of 27 predictions landed in the correct category, and almost all of the rest were classified as med_low. It seems that the low, med_low, and med_high categories are quite often confused with one another.
Taken together, the other variables are only moderately good at predicting the crime variable. This is consistent with the LDA result on the train data: the above-mentioned categories overlap, and hence their prediction is difficult.
# reload the data
data("Boston")
# standardize data
boston_rescaled <- scale(Boston)
Distances between observations are calculated with two different metrics: a Euclidean distance matrix and a Manhattan distance matrix.
# euclidean distance matrix
dist_eu <- dist(boston_rescaled)
# look at the summary of the distances
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
# manhattan distance matrix
dist_man <- dist(boston_rescaled, method = "manhattan")
# look at the summary of the distances
summary(dist_man)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.2662 8.4832 12.6090 13.5488 17.7568 48.8618
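As a quick sanity check of the two metrics: for the points (0, 0) and (3, 4), the Euclidean distance is sqrt(3^2 + 4^2) = 5 while the Manhattan distance is |3| + |4| = 7, which is why the Manhattan summary above is systematically larger.
# tiny illustration of the two distance metrics
x <- rbind(c(0, 0), c(3, 4))
dist(x)                        # Euclidean: 5
dist(x, method = "manhattan")  # Manhattan: 7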
# k-means clustering
km <- kmeans(boston_rescaled, centers = 3)
# plot the Boston dataset with clusters
pairs(boston_rescaled[,1:5], col = km$cluster)
pairs(boston_rescaled[,6:10], col = km$cluster)
Here, I will investigate the optimal number of clusters.
set.seed(123)
# determine the number of clusters
k_max <- 8
# calculate the total within sum of squares
twcss <- sapply(1:k_max, function(k){kmeans(boston_rescaled, k)$tot.withinss})
# visualize the results
qplot(x = 1:k_max, y = twcss, geom = 'line')
The investigation shows that 2 is the optimal number of clusters, as the total within-cluster sum of squares drops sharply up to two clusters and then levels off (the "elbow" in the graph). Let's use that and visualize the clusters.
# k-means clustering
km2 <- kmeans(boston_rescaled, centers = 2)
# plot the Boston dataset with clusters
pairs(boston_rescaled[,1:5], col = km2$cluster)
pairs(boston_rescaled[,6:10], col = km2$cluster)
For some of the variables, the cluster colors do seem to fit the clusters nicely. For others, the clusters are intermixed and hence less good at separating observations. Compared with the three-cluster version, this two-cluster solution looks tidier.
Here, I will plot the train data in 3D. I will plot the data twice, first using the crime categories for coloring and then using k-means clusters with the same number of classes, to see the effect of the coloring strategy on the plot.
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
# Access the plotly package. Create a 3D plot of the columns of the matrix product using the crime classes in train data.
library(plotly)
## Warning: package 'plotly' was built under R version 4.2.2
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = classes)
# Draw similar plot, but use k-means for setting colors.
# remove the crime variable from train data to run k-means
# use centers = 4 to compare the effect of k-means vs quantile in setting colors
trainkm <- dplyr::select(train, -crime)
kmtrain <- kmeans(trainkm, centers = 4)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3, type= 'scatter3d', mode='markers', color = kmtrain$cluster)
Observations are located identically in both plots; only the coloring of the clusters differs. I used the same number of classes in both in order to compare the effect of the grouping method. To summarize, the points are the same in both plots, but their coloring differs: the quantile-based crime classes are defined by fixed proportions of the observations, whereas k-means minimizes the distances from observations to their cluster centers, making it more sensitive to the overall composition of the data.